Computation Model: Logic Gates And Digital Circuits
All systems with the computation model: logic gates and digital circuits
Systems (40)
AMD CDNA
f(x) = High-throughput HPC accelerator families
CDNA (CDNA 1 and CDNA 2) is AMD's compute-focused GPU architecture powering the Instinct MI100 and MI200, building on matrix cores and Infinity Fabric connectivity to link chiplets, HBM stacks, and SIM...
AMD GCN
f(x) = Graphics Core Next compute-and-graphics pipeline
Graphics Core Next architecture is a 28nm AMD GPU microarchitecture powering Radeon HD 7000 series and FirePro W9000 boards; it uses compute units with scalar and vector execution to accelerate graphi...
AMD RDNA
f(x) = real-time graphics rendering and compute acceleration with RDNA compute units
AMD RDNA (Navi 10) is a 7nm architecture with 36 compute units and 32-wide SIMD wavefronts that deliver the rasterization and general-purpose throughput powering Radeon RX 5700-class GPUs, exposing as...
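As a loose illustration of how a 32-wide wavefront executes, the NumPy sketch below models lockstep lanes with a per-lane execution mask; every name here is illustrative, not an AMD API.

```python
import numpy as np

# Hypothetical sketch of wave32 execution: one wavefront is 32 lanes that
# run the same instruction in lockstep, with a per-lane execution mask
# standing in for hardware divergence handling.
WAVE_WIDTH = 32

def wave_execute(values: np.ndarray) -> np.ndarray:
    # The branch "if value < 0: negate" runs on all 32 lanes at once;
    # the mask disables lanes that did not take the branch.
    mask = values < 0
    return np.where(mask, -values, values)  # predicated lane update

lanes = np.arange(-16, 16, dtype=np.float32)  # one 32-wide wavefront
print(wave_execute(lanes))
```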
AMD RDNA2
f(x) = Discrete GPU architecture for consumer and professional graphics compute
AMD RDNA2 architecture powers the Radeon RX 6000 series, combines dedicated ray accelerators with AMD Infinity Cache, and balances throughput across gaming and compute workloads.
AMD RDNA3
f(x) = High-end graphics and compute acceleration for Radeon RX 7900 series
AMD RDNA3 uses 5nm+ Navi 3x chiplets with improved shader engines, wider workgroup processors, chiplet-based Infinity Cache, and ray accelerators to deliver higher throughput and lower power for the R...
AMD RDNA4
f(x) = Navi 4x GPU architecture
AMD RDNA4 is the upcoming Navi 4x iteration of the RDNA lineage, expanding its ray accelerator fabric and AI feature suite while remaining a deterministic, irreversible, exact execution platf...
AMD Terascale
f(x) = Unified VLIW5 shader compute and Eyefinity multi-display pipeline
The HD 5000-era Terascale architecture couples 40nm VLIW5 shader arrays with Eyefinity-aware raster and memory subsystems, providing a general-purpose GPU fabric where GPGPU shader arrays power multi-...
ARM Mali G12x
f(x) = upper-midrange mobile GPU acceleration
Latest Valhall-based upper-midrange GPU used in MediaTek Dimensity 9000/9000+ SoCs, balancing graphics, display, and AI workloads.
ARM Mali G5x
f(x) = midrange mobile GPU acceleration
The Mali G5x series spans the Bifrost-based G52 and Valhall-based G57, midrange GPUs used in chipsets like the MediaTek Dimensity 820, targeting efficient mobile graphics and compute.
ARM Mali G7x
f(x) = high-end mobile GPU acceleration
ARM Mali G77/G78 high-end mobile GPU lineup powering Exynos 1080 and 2100 with Valhall architecture improvements in throughput, efficiency, and feature set.
ARM Mali G9x
f(x) = AI and image-processing pipelines for flagship mobile SoCs
ARM Mali G90 and G91 GPUs target flagship mobile devices, combining top-tier Valhall-architecture rendering with neural front-ends for AI and image processing. MediaTek Dimensity 1200 smartphones ...
ARM Mali T6xx
f(x) = mobile GPU and TV GPU acceleration
First-generation ARM Mali T series (T600 family) graphics clusters integrated into Exynos 5 Octa and similar SoCs, delivering shader-based mobile and smart-TV graphics pipelines.
AWS Inferentia
f(x) = deep learning inference pipelines
AWS-designed chip for deep learning inference with high throughput and low latency, used in Inferentia-based EC2 Inf1 instances and the Neuron SDK stack.
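A hedged sketch of the Neuron SDK flow for Inf1, assuming the torch-neuron package's tracing entry point; the model, shapes, and filename are placeholders.

```python
import torch
import torch_neuron  # AWS Neuron SDK plugin; registers the torch.neuron namespace

# Sketch of compiling a model ahead of time for Inferentia, assuming the
# torch-neuron tracing flow documented for Inf1 instances. The model and
# example input are placeholders.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
example = torch.rand(1, 128)

# Ahead-of-time compilation to a NeuronCore-executable graph.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("model_neuron.pt")  # reload later with torch.jit.load
```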
Apple ANE (Neural Engine)
f(x) = AI accelerator
Apple's Neural Engine inside A-series and M-series SoCs is a dedicated neural processing unit composed of hardware matrix multiply arrays and supporting SRAM/control pipelines that deterministically a...
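A hedged sketch of routing work toward the Neural Engine, assuming the coremltools conversion flow; the Core ML runtime, not the developer, decides per layer whether the ANE actually runs it, and the model here is a placeholder.

```python
import torch
import coremltools as ct

# Sketch: convert a traced PyTorch model to Core ML so the runtime can
# schedule it on the Neural Engine. compute_units=ALL lets Core ML pick
# ANE/GPU/CPU per layer; placement is not user-controlled.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
    compute_units=ct.ComputeUnit.ALL,  # allow ANE dispatch where supported
    convert_to="mlprogram",
)
mlmodel.save("model.mlpackage")
```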
Apple GPU
f(x) = high-efficiency SoC compute (graphics & neural)
Apple integrated GPU (e.g., in M1/M2) featuring unified memory, tile-based rendering, and tight coherence with the CPU to deliver graphics and neural compute within the SoC.
Cell Broadband Engine
f(x) = high-throughput parallel workloads
The Cell Broadband Engine pairs a Power Processing Element (PPE) with multiple Synergistic Processing Elements (SPEs) to support the PlayStation 3 and supercomputers such as IBM Roadrunner, delivering...
Cerebras WSE (Wafer-Scale Engine)
f(x) = Large-scale deep learning training and inference
The Cerebras Wafer-Scale Engine is a wafer-scale AI accelerator with millions of cores interconnected through a dense on-chip fabric, delivering massive compute for large-scale model training on syste...
Google Sycamore
f(x) = quantum supremacy via random circuit sampling
A 53-qubit superconducting transmon processor built by Google AI Quantum that executed a random quantum circuit sampling task in 2019 to demonstrate quantum supremacy, providing empirical evidence of ...
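A toy version of that sampling task, sketched with Google's Cirq library on 4 qubits instead of 53 so it stays classically simulable; circuit depth and gate choices are illustrative.

```python
import cirq
import numpy as np

# Random circuit sampling in miniature: random single-qubit rotations
# interleaved with entangling CZ layers, then repeated measurement.
qubits = cirq.LineQubit.range(4)
rng = np.random.default_rng(0)
circuit = cirq.Circuit()
for _ in range(5):  # depth-5 toy circuit
    circuit.append(cirq.rz(rng.uniform(0, 2 * np.pi))(q) for q in qubits)
    circuit.append(cirq.ry(rng.uniform(0, 2 * np.pi))(q) for q in qubits)
    circuit.append(cirq.CZ(a, b) for a, b in zip(qubits[::2], qubits[1::2]))
circuit.append(cirq.measure(*qubits, key="m"))

samples = cirq.Simulator().run(circuit, repetitions=1000)
print(samples.histogram(key="m"))  # bitstring frequency distribution
```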
Google TPU v5
f(x) = High-throughput tensor acceleration for deep learning training and inference
Google's fifth-generation TPU (v5) is a datacenter AI accelerator optimized for massive matrix multiplies; each chip exposes more matrix units than v4, and when assembled into TPU v5 pods it delivers ...
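A minimal sketch of the workload in question using JAX, where jax.jit lowers a large matrix multiply to the TPU's matrix units when run on a TPU host; shapes and dtypes here are illustrative, and the identical code also runs on CPU/GPU.

```python
import jax
import jax.numpy as jnp

# Jitted dense matmul: the core operation TPU matrix units are built for.
@jax.jit
def matmul(a, b):
    return jnp.dot(a, b)

ka, kb = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(ka, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(kb, (1024, 1024), dtype=jnp.bfloat16)
print(matmul(a, b).shape)  # (1024, 1024)
```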
Graphcore IPU
f(x) = machine intelligence workloads
Graphcore's Intelligence Processing Unit (IPU) is a massively parallel AI accelerator composed of thousands of SRAM-backed tile cores linked by an exchange-style interconnect, enabling sparse tensor g...
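A hedged sketch of targeting the IPU from PyTorch, assuming Graphcore's PopTorch wrapper (poptorch.inferenceModel) from the Poplar SDK; the model and shapes are placeholders.

```python
import torch
import poptorch  # assumes Graphcore's PopTorch package from the Poplar SDK

# Sketch: poptorch.inferenceModel wraps a standard torch module and
# compiles it for the IPU tile fabric. See Graphcore docs for options.
model = torch.nn.Sequential(torch.nn.Linear(256, 64), torch.nn.ReLU()).eval()
ipu_model = poptorch.inferenceModel(model)

x = torch.rand(8, 256)
print(ipu_model(x).shape)  # executed on IPU hardware (or the IPU emulator)
```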
IBM Condor
f(x) = Unconventional / Quantum
Planned ~1000-qubit superconducting processor from the IBM Quantum roadmap, extending its gate-based quantum systems.
IBM Eagle
f(x) = gate-based superconducting quantum circuits
IBM Eagle superconducting quantum processor (127 transmon qubits) supports gate-based quantum circuit research and Qiskit experimentation toward fault-tolerant architectures.
IBM Osprey (433-qubit gate-based quantum computer)
f(x) = unitary quantum computation / quantum algorithms (Shor factoring, Grover search, VQE)
IBM's Osprey is a 433-qubit superconducting heavy-hexagon processor in IBM Quantum System Two, engineered for Qiskit access and Qiskit Runtime workflows to run unitary circuits across hundreds of qubi...
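The gate-based Qiskit workflow named in the Eagle and Osprey entries can be illustrated with a small circuit; this is a minimal local sketch checked with an exact statevector, since submitting to real backends through Qiskit Runtime involves account setup and version-specific APIs omitted here.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# GHZ state over 5 qubits: the kind of gate-based circuit run on Eagle-
# and Osprey-class processors. On hardware you would add measurements
# and submit the circuit through Qiskit Runtime.
n = 5
qc = QuantumCircuit(n)
qc.h(0)
for i in range(n - 1):
    qc.cx(i, i + 1)  # entangle the chain qubit by qubit

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # ~0.5 each on |00000> and |11111>
```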
Imagination PowerVR
f(x) = mobile graphics/AR workloads
Tile-based PowerVR SGX/Rogue GPUs deployed in Apple iPhone/iPad series, featuring deferred rendering pipelines for efficient mobile graphics and AR experiences.
Intel Xe Low Power GPU
f(x) = integrated mobile graphics
Intel Xe Low Power (Xe-LP) GPU powers Tiger Lake integrated graphics and the DG1 discrete GPU, offering up to 96 execution units (768 vector ALUs) and dedicated media engines including AV1 decode and HEVC encode/decode acc...
Intel Xe-HPG
f(x) = Discrete Intel Arc Alchemist GPU acceleration for gaming, media, and AI workflows.
The Intel Xe-HPG family packages discrete Arc Alchemist GPUs to deliver hardware ray tracing, advanced media encode/decode, and AI acceleration for consumer gaming and creative workloads.
Intel Xe2 GPUs
f(x) = Next-generation tile-based GPU acceleration
Upcoming Xe2 architecture is positioned as a next-gen tile-based GPU platform for discrete and data center workloads, extending Intel Xe with larger tiles and AI-ready matrix engines.
IonQ
f(x) = gate-based quantum circuits orchestrated with trapped ions and photonic links
Trapped-ion quantum computer hosted in vacuum chambers with photonic interconnects for modular entanglement, delivered over the cloud as the IonQ Harmony and Aria systems.
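A hedged sketch of that cloud delivery path, assuming the qiskit-ionq provider package; the API key and backend name are placeholders, so check the current qiskit-ionq docs for exact identifiers.

```python
from qiskit import QuantumCircuit
from qiskit_ionq import IonQProvider  # assumes the qiskit-ionq plugin

# Bell circuit submitted to an IonQ backend through the Qiskit provider.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

provider = IonQProvider("MY_IONQ_API_KEY")          # placeholder key
backend = provider.get_backend("ionq_simulator")    # or a QPU backend
job = backend.run(qc, shots=1000)
print(job.result().get_counts())
```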
NVIDIA Ampere GPUs
f(x) = High-throughput floating-point, tensor, and ray-tracing compute for AI/HPC workloads
NVIDIA's Ampere family spans the compute-focused GA100 and the graphics-focused GA102, which pairs second-generation ray tracing cores with third-generation tensor cores, powering the A100 data-center accelerator and GeForce RTX 3090 consumer card on...
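One common way to engage Ampere's tensor cores from application code is reduced-precision autocasting in PyTorch; the sketch below is illustrative rather than Ampere-specific and falls back to CPU when no GPU is present.

```python
import torch

# Autocast runs the matmul in bfloat16, a precision the Ampere tensor
# cores accelerate when the tensors live on a CUDA device.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

with torch.autocast(device_type=device, dtype=torch.bfloat16):
    c = a @ b  # dispatched to tensor cores on Ampere-class GPUs
print(c.dtype, c.shape)
```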
NVIDIA Blackwell
f(x) = Grace Blackwell superchips
Upcoming NVIDIA Blackwell architecture is tuned for inference, pairing DPX and Tensor cores in the Grace Blackwell superchip line to accelerate dense matrix and sparse transformer workloads.
NVIDIA Fermi (GF100)
f(x) = Massively-parallel double-precision CUDA compute and graphics shading workloads
NVIDIA Fermi GF100 architecture introduced compute capability 2.x with ECC-protected GDDR5, hardware thread scheduling, and large register files; Tesla C2050 HPC accelerators and GeForce GTX 480 gamin...
NVIDIA Hopper GPU
f(x) = Transformer and dense matrix acceleration
The Hopper family (H100) is NVIDIA's GPU architecture for large-scale transformer training, pairing a new Transformer Engine with CUDA/SIMT cores and tensor cores on a 4nm/5nm FinFET node; HGX H100 ca...
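A hedged sketch of the FP8 path the Transformer Engine enables, assuming NVIDIA's transformer_engine Python package with its te.Linear module and fp8_autocast context; layer sizes are placeholders and the exact scaling recipe options are version-dependent.

```python
import torch
import transformer_engine.pytorch as te  # assumes NVIDIA Transformer Engine

# te.Linear plus an fp8_autocast region lets H100-class GPUs run the
# layer through FP8 tensor-core kernels with per-tensor scaling.
layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(16, 1024, device="cuda", dtype=torch.float32)

with te.fp8_autocast(enabled=True):
    y = layer(x)  # executed in FP8 on Hopper hardware
print(y.shape)
```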
NVIDIA Kepler GPU microarchitecture
f(x) = CUDA compute and high-throughput SIMD workloads
Kepler GK110/GK210 derivatives underpin Tesla K80 and GeForce GTX 780, delivering CUDA compute services with large SMX arrays tuned for both HPC and graphics tasks.
NVIDIA Maxwell
f(x) = SIMT GPU acceleration for graphics and HPC workloads
NVIDIA Maxwell (GM204/GM200) architecture drives energy-efficient graphics and compute, powering GeForce GTX 980 and Tesla M40 with improved power efficiency and mixed-precision throughput.
NVIDIA Pascal
f(x) = Pascal GPU microarchitecture (GP100/GP104)
NVIDIA Pascal GPUs built on the GP100 and GP104 dies deliver high-bandwidth memory and compute-intensive blocks, powering Tesla P100 accelerators and GeForce GTX 1080 cards across HPC and graphics workl...
NVIDIA Tesla
f(x) = dense floating-point and tensor algebra for HPC and AI training workloads
NVIDIA Tesla GPU compute cards deliver massively parallel floating-point and tensor acceleration for HPC, AI training, and inference, leveraging NVLink, HBM memory, and CUDA programming to pack thousa...
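As a rough illustration of the CUDA-programmed dense algebra these cards serve, the sketch below uses CuPy (a NumPy-compatible GPU array library, not an NVIDIA-specific requirement) to run a single-precision matmul on the device; sizes are arbitrary.

```python
import cupy as cp

# CuPy mirrors the NumPy API but allocates on the GPU and launches CUDA
# kernels (cuBLAS SGEMM for the matmul) under the hood.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b                          # SGEMM executed on the device
cp.cuda.Stream.null.synchronize()  # wait for the async kernel to finish
print(float(c.sum()))              # device-to-host copy of the scalar
```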
NVIDIA Turing
f(x) = Hybrid ray tracing and AI inference pipelines
NVIDIA Turing is a GPU microarchitecture that uses dedicated real-time ray tracing RT cores and Tensor cores for deep learning while powering GeForce RTX 2080, Tesla T4, and similar boards.
NVIDIA Volta
f(x) = GV100
NVIDIA Volta GPU architecture built on the GV100 die with tensor cores, HBM2 memory, and NVLink, powering Tesla V100 accelerators and DGX-1 systems.
Qualcomm Hexagon NPU
f(x) = on-device AI inference
Qualcomm Hexagon NPU is the AI accelerator embedded in Snapdragon platforms, combining Hexagon DSP cores with a tensor accelerator fabric to deliver power-efficient on-device inference.
Samsung HBM-PIM
f(x) = Unconventional / In-memory compute
Samsung HBM2 memory with Processing-in-Memory logic for AI, deployed in research prototypes to offload vector-heavy kernels and shorten data movement for next-generation accelerators.